WebMCP is a new technology that allows AI agents to interact with web pages more directly. It works by turning web pages into MCP (Model Context Protocol) servers via a Chrome extension. This enables agents to understand and manipulate web content in a structured way, potentially improving efficiency and user experience.
The technology, backed by Google and Microsoft, is designed to work alongside human users, allowing them to ask agents questions about the page they are viewing. WebMCP uses a Declarative API for standard actions and an Imperative API for more complex tasks. Early experiments demonstrate the ability to query web pages and receive structured data back.
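The declarative/imperative split can be pictured with a small sketch. Everything here is hypothetical (the real WebMCP surface is a JavaScript API exposed in the browser, and the class and method names below are invented for illustration): the page declares a tool schema an agent can read, and separately wires imperative code that runs when the tool is invoked, so the agent gets structured data back instead of scraping HTML.

```python
import json

class PageTools:
    """Toy stand-in for a web page exposing MCP-style tools to an agent."""
    def __init__(self):
        self.declared = {}   # declarative side: schemas the agent can inspect
        self.handlers = {}   # imperative side: code the page runs on invocation

    def declare(self, name, description, params):
        self.declared[name] = {"description": description, "params": params}

    def on_call(self, name, handler):
        self.handlers[name] = handler

    def list_tools(self):
        # What an agent would fetch to learn the page's capabilities.
        return json.dumps(self.declared, indent=2)

    def call(self, name, **kwargs):
        return self.handlers[name](**kwargs)

# A page declaring a catalog search (declarative) and its logic (imperative).
page = PageTools()
page.declare("search_products", "Search the store catalog", {"query": "string"})
page.on_call("search_products",
             lambda query: [p for p in ["mug", "mouse"] if query in p])

print(page.list_tools())
print(page.call("search_products", query="mou"))  # structured data, not raw HTML
```

The key design point the sketch tries to capture is that the declaration and the handler are separate: an agent can reason over the schema before deciding whether to invoke anything.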
CLI-Anything bridges the gap between AI agents and the world's software by making any software agent-ready. It's a universal interface for both humans and AI, offering a structured, lightweight, and self-describing approach. The project automates the creation of CLIs for applications like GIMP, Blender, and LibreOffice through a 7-phase pipeline: analyzing code, designing command groups, implementing the CLI, planning tests, writing tests, documenting, and publishing. It supports multiple platforms including Claude Code, OpenClaw, and Codex, with a focus on authentic software integration and production-grade testing.
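The 7-phase pipeline can be sketched as a simple ordered sequence. This is an illustrative sketch only: the phase names come from the summary above, but the `run_pipeline` function and its logging are invented placeholders, not CLI-Anything's actual implementation.

```python
# Phases as listed in the summary; the real project would drive an agent
# through each one against the target application's source.
PHASES = [
    "analyze_code",
    "design_command_groups",
    "implement_cli",
    "plan_tests",
    "write_tests",
    "document",
    "publish",
]

def run_pipeline(target: str) -> list[str]:
    """Run each phase in order, recording one log entry per phase."""
    log = []
    for phase in PHASES:
        # Placeholder: a real pipeline would invoke tooling here.
        log.append(f"{phase}: {target} ok")
    return log

for entry in run_pipeline("gimp"):
    print(entry)
```

Running phases strictly in order matters here: later phases (writing tests, documenting) depend on artifacts produced by earlier ones (the implemented CLI).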
This article discusses the recent wave of AI-driven layoffs in the tech industry, with companies like Atlassian and Block citing AI automation as a key reason. It explores the growing debate between the Model Context Protocol (MCP) and APIs for connecting AI agents, with some developers favoring APIs for their simplicity and efficiency. The piece also highlights the increasing trend of using Mac Minis as dedicated hosts for AI agents, and the rapid growth of platforms like Replit and Claude, indicating a shift in how software is developed and deployed with the aid of AI.
Microsoft's Phi-4-Reasoning-Vision-15B model challenges the trend of ever-larger AI models by demonstrating strong reasoning capabilities with a comparatively compact size. Trained on curated reasoning data, it aims to achieve performance without the massive compute costs associated with frontier models. The model supports multimodal tasks, combining text and image understanding, and offers flexible reasoning modes for different workloads. This research highlights the importance of data quality and training strategy, suggesting that smarter training techniques can be as impactful as simply increasing model size, particularly for AI agents and practical deployments.
This article presents findings from a survey of over 900 software engineers regarding their use of AI tools. Key findings include the dominance of Claude Code, the mainstream adoption of AI in software engineering (95% weekly usage), the increasing use of AI agents (especially among staff+ engineers), and the influence of company size on tool choice. The survey also reveals which tools engineers love, with Claude Code being particularly favored, and provides demographic information about the respondents. A longer, 35-page report with additional details is available for full subscribers.
Superhuman announced the expansion of Superhuman Go’s AI agent ecosystem with new partner agents from Box, Gamma, and Wayground. These agents bring specialized capabilities like visual content creation and enterprise document access to users within the tools they rely on every day, accelerating the growth of Superhuman Go’s open agent platform.
This article explores how agentic AI can revolutionize deep learning experimentation by automating tasks like hyperparameter tuning, architecture search, and data augmentation. It delves into the core concepts, benefits, and practical considerations of using agentic systems to accelerate and improve the deep learning workflow.
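The core loop such agentic systems run is propose-observe-act. The sketch below is a minimal stand-in, not any system from the article: the "experiment" is a toy objective function instead of a training run, and the proposal policy is seeded random search rather than an LLM planner; all names are illustrative.

```python
import random

def objective(lr: float, depth: int) -> float:
    # Toy stand-in for validation accuracy after a training run:
    # peaks at lr=0.01, depth=4.
    return 1.0 - abs(lr - 0.01) * 10 - abs(depth - 4) * 0.05

def tune(budget: int = 50, seed: int = 0):
    """Agent loop: propose a candidate, observe its score, keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(budget):
        candidate = {"lr": rng.uniform(0.001, 0.1),
                     "depth": rng.randint(1, 8)}
        score = objective(**candidate)        # observe the experiment
        if best is None or score > best[0]:
            best = (score, candidate)         # act on the feedback
    return best

score, params = tune()
print(f"best score {score:.3f} with {params}")
```

A real agentic setup replaces the random proposer with a model that conditions on the history of (candidate, score) pairs, which is what lets it prune bad regions of the search space faster than blind search.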
NIST is launching a new project around standards for artificial intelligence agents, seeking feedback on the secure use of the rapidly evolving technology. The initiative focuses on security concerns arising from the autonomous nature of AI agents and aims to foster interoperability and public trust. It includes a request for information on AI agent security and a draft concept paper on software and AI agent identity and authorization.
A comprehensive overview of the current state of the Model Context Protocol (MCP), including advancements, challenges, and future directions.
The first-ever malicious Model Context Protocol (MCP) server, a trojanized npm package named `postmark-mcp`, has been discovered exfiltrating sensitive data from users' emails. The package copied every email processed to a server controlled by the attacker.
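One defense-in-depth check against this class of exfiltration is to compare the recipients actually present in an outbound payload against the recipients the caller intended. The sketch below is illustrative only (it is not the `postmark-mcp` code, and the payload shape and addresses are invented); it shows why a hidden copy-to-attacker address is mechanically easy to catch.

```python
def find_unexpected_recipients(payload: dict, intended: set[str]) -> set[str]:
    """Return any To/Cc/Bcc address the caller did not ask for."""
    actual = set()
    for field in ("to", "cc", "bcc"):
        actual.update(payload.get(field, []))
    return actual - intended

# A payload a trojanized mail tool might emit: the user asked to email
# alice, but a hidden Bcc copies the message to the attacker.
payload = {"to": ["alice@example.com"], "bcc": ["attacker@evil.example"]}
leaked = find_unexpected_recipients(payload, {"alice@example.com"})
print(leaked)  # the hidden exfiltration address stands out
```

Running a check like this at the boundary between an MCP client and the tool it calls means the trusted side verifies the payload, rather than trusting the (possibly compromised) package to behave.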